# Efficient Language Models
## OpenELM 1.1B

OpenELM is a series of efficient language models introduced by Apple. It uses a layer-wise scaling strategy to allocate parameters efficiently across transformer layers, and offers pretrained and instruction-tuned models ranging from 270M to 3B parameters.

**Tags:** Large Language Model, Transformers · **Publisher:** apple · **Downloads:** 683 · **Likes:** 31
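
The OpenELM checkpoints are standard Hugging Face repositories, so a minimal loading sketch may be useful. The hub id `apple/OpenELM-1_1B` and the tokenizer choice are assumptions taken from Apple's published model cards; OpenELM repositories ship custom modeling code, hence `trust_remote_code=True`.

```python
# Minimal sketch: loading an OpenELM checkpoint with Hugging Face Transformers.
# The hub id and tokenizer pairing are assumptions based on Apple's model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-1_1B",    # assumed hub id for the "OpenELM 1.1B" card
    trust_remote_code=True,  # OpenELM ships its own modeling code
)
# OpenELM does not bundle a tokenizer; Apple's examples pair it with the
# (gated) Llama-2 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Efficient language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the 450M cards below; only the hub id changes.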
## RWKV7 Goose World3 1.5B HF

The RWKV-7 ("Goose") model packaged in flash-linear-attention format, supporting English text generation.

**License:** Apache-2.0 · **Tags:** Large Language Model, English · **Publisher:** RWKV · **Downloads:** 70 · **Likes:** 2
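
Loading the fla-format checkpoint follows the same Transformers pattern. A sketch, assuming a hub id inferred from the card title (the exact id may differ) and that the repository carries its own modeling code:

```python
# Sketch: text generation with an RWKV-7 checkpoint in flash-linear-attention
# format. The hub id below is an assumption inferred from the card title.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RWKV/rwkv7-goose-world3-1.5b-hf"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The goose flew over", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```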
## OpenELM 450M Instruct

The instruction-tuned 450M-parameter member of OpenELM, Apple's set of open-source efficient language models. The family employs a layer-wise scaling strategy to allocate parameters, with pretrained and instruction-tuned versions ranging from 270 million to 3 billion parameters.

**Tags:** Large Language Model, Transformers · **Publisher:** apple · **Downloads:** 114.41k · **Likes:** 47
## OpenELM 450M

The pretrained 450M-parameter member of OpenELM, Apple's set of open, efficient language models. The family employs a layer-wise scaling strategy to allocate parameters and improve accuracy, offering pretrained and instruction-tuned versions from 270 million to 3 billion parameters.

**Tags:** Large Language Model, Transformers · **Publisher:** apple · **Downloads:** 857 · **Likes:** 26
## RetNet 410M XATL

A model with linear-cost inference based on the RetNet architecture hybridized with the Transformer, obtained through cross-architecture transfer learning (XATL).

**License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, English · **Publisher:** NucleusAI · **Downloads:** 347 · **Likes:** 2
## TinyBERT L-4 H-312 v2 Finetuned on WikiText-103

A fine-tuned version of TinyBERT_L-4_H-312_v2 on the WikiText-103 dataset, intended for general text tasks such as masked-token prediction.

**Tags:** Large Language Model, Transformers · **Publisher:** saghar · **Downloads:** 20 · **Likes:** 0
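
TinyBERT is a BERT-style encoder, so a checkpoint fine-tuned on plain WikiText-103 text is most naturally exercised as a masked language model. A sketch, assuming a hub id reconstructed from the card title and publisher, and assuming the fine-tune kept the masked-LM head:

```python
# Sketch: masked-token prediction with the fine-tuned TinyBERT checkpoint.
# The hub id is an assumption reconstructed from the card title.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="saghar/TinyBERT_L-4_H-312_v2-finetuned-wikitext103",
)
for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```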